With the deregulation of power markets and the availability of financial risk management products, new operations research challenges have emerged. In "Optimal Economic Dispatch and Risk Management of Thermal Power Plants in Deregulated Markets," M. Thompson introduces a new modeling and computational framework for handling many of these emerging challenges. The article integrates the dynamic optimization of power plant flexibility and operational constraints with models of the forward curves of power and gas needed for dynamic hedging. A particular focus of the article is the pricing and management of the counterparty credit risk of tolling agreements, a popular new financial product for power producers.

Hydro storage is an important building block in smart grids that must cope with a growing number of intermittent, renewable power generators, as it is the only mature technology that can store electricity at a large scale. In a market environment where electricity prices move with the fluctuating power produced by myriad wind turbines and solar panels, operating hydro storage resources efficiently requires that generating companies plan for uncertainty. In "Optimizing Trading Decisions for Hydro Storage Systems Using Approximate Dual Dynamic Programming," N. Löhndorf, D. Wozabal, and S. Minner describe a stochastic model of hydro storage operation in an electricity market and a solution to the underlying optimization problem. They demonstrate their approach's efficiency in a real-world case study by solving the problem of a generating company that operates a number of connected hydro storage plants.

Connectivity requirements are a common component of forest planning models, with important examples arising in wildlife habitat protection. In harvest-scheduling models, a typical constraint is that adjacent areas of harvested forest not exceed a certain maximum area. Additionally, these models can address preservation concerns by requiring that large contiguous patches of mature forest be left standing after harvest. In "Imposing Connectivity Constraints in Forest Planning Models," R. Carvajal, M. Constantino, M. Goycoolea, J. P. Vielma, and A. Weintraub present an integer programming methodology consisting of a node-cut-type formulation and a cutting-plane algorithm. The approach is flexible in the types of requirements that can be modeled. Computational experiments on real, medium-sized forest instances from a publicly available repository show that the formulation enjoys quick, tight linear programming bounds and can provide good-quality feasible solutions within reasonable computation time.

How should a search engine design its online ad auctions to account for the fact that advertisers privately learn their conversion rates (the probability of a click turning into a sale) over time? How should a manufacturer structure its supply contract to account for the fact that retailers dynamically learn the demand they face? To address these questions, S. Kakade, I. Lobel, and H. Nazerzadeh show in "Optimal Dynamic Mechanism Design and the Virtual-Pivot Mechanism" how to design profit-maximizing mechanisms for dynamic settings that satisfy a separability condition. The authors combine a dynamic version of VCG (the pivot mechanism) with Myerson's virtual-value method to create a dynamic profit-maximizing mechanism with strong incentive compatibility guarantees. They also discuss particular cases, such as multiarmed bandit problems, in which the virtual-pivot mechanism generates especially simple optimal dynamic mechanisms.
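For readers less familiar with virtual values, the classical static ingredient that the authors build on can be stated in one line; the sketch below is textbook background, not the paper's dynamic construction.

```latex
% Myerson's static virtual value for a bidder whose value v is drawn
% from a distribution with cdf F and density f:
\[
  \varphi(v) \;=\; v \;-\; \frac{1 - F(v)}{f(v)}
\]
% A profit-maximizing seller allocates as if each report v were replaced
% by \varphi(v). Example: for v uniform on [0,1], \varphi(v) = 2v - 1,
% so the revenue-maximizing reserve price solves \varphi(v) = 0,
% giving a reserve of 1/2.
```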
Sponsored search advertising (SSA) has emerged as one of the most important forms of online advertising. Users' search queries convey significant information about their current needs and context, allowing search engines to target ads to users better than other forms of online advertising can. Although this form of advertising is effective, SSA auctions tend to be complicated, and, as a result, advertisers often use naïve policies to select bids. In "Optimal Bidding in Multi-Item Multislot Sponsored Search Auctions," V. Abhishek and K. Hosanagar model different aspects of SSA and recommend a bidding policy that advertisers can use to optimize bids for a portfolio of keywords. They also outline a policy that incorporates interactions between keywords. The authors test the performance of these policies through a field experiment and find that the techniques are effective in practice.

Portfolio optimization is known for its sensitivity to estimation error and model assumptions, and the stochastic nature of financial markets brings even more uncertainty into the setting. This sensitivity is one of the major criticisms preventing the translation of the theoretical foundation into viable portfolio construction algorithms. In "Robust Portfolio Control with Stochastic Factor Dynamics," P. Glasserman and X. Xu formulate a stochastic robustness approach using relative entropy to capture more general uncertainty than the traditional parameter-based approach. They obtain a closed-form solution for the multiperiod portfolio control problem in a model in which returns are driven by factors that evolve stochastically. Applying the robust strategy to a commodity portfolio, they obtain significant performance improvements in out-of-sample tests.

In finance and economics, jump-diffusion processes are widely used to model the behavior of prices, rates, and other variables of interest, and Monte Carlo simulation is an important tool for addressing the pricing, risk management, and inference problems arising in this context. In "Exact Sampling of Jump Diffusions," K. Giesecke and D. Smelov develop a simulation method for generating exact samples of jump diffusions with state-dependent drift, volatility, jump intensity, and jump size. The method leads to unbiased simulation estimators of security prices, transition densities, hitting probabilities, and other quantities. Numerical tests illustrate these estimators' advantages over discretization-based estimators, which suffer from discretization bias.
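To give a flavor of how a state-dependent jump intensity can be simulated, the following minimal Python sketch uses the classical thinning (acceptance/rejection) idea, assuming the intensity admits a known upper bound. It is illustrative only: the Euler steps it takes between candidate jump times introduce precisely the discretization bias that the authors' exact method eliminates.

```python
import numpy as np

rng = np.random.default_rng(42)

def jump_times_by_thinning(x0, lam, lam_max, mu, sigma, T, dt=1e-3):
    """Sample jump times of a process whose jump intensity lam(x) depends
    on the state x and is bounded above by lam_max.

    Candidate jump times arrive as a Poisson process of rate lam_max; a
    candidate at time t is accepted with probability lam(x_t) / lam_max
    (the classical thinning test). The state is advanced between
    candidates with Euler steps, so this sketch is biased; the paper's
    exact method removes that bias.
    """
    t, x, jumps = 0.0, x0, []
    while True:
        t_cand = t + rng.exponential(1.0 / lam_max)   # next candidate time
        if t_cand > T:
            return jumps
        n = max(1, int(np.ceil((t_cand - t) / dt)))   # Euler steps to t_cand
        h = (t_cand - t) / n
        for _ in range(n):
            x += mu(x) * h + sigma(x) * np.sqrt(h) * rng.standard_normal()
        t = t_cand
        if rng.random() < lam(x) / lam_max:           # thinning acceptance
            jumps.append(t)
            x += rng.normal(0.0, 0.5)                 # illustrative jump size

# Illustrative coefficients; this intensity is bounded above by 2.5.
times = jump_times_by_thinning(
    x0=1.0,
    lam=lambda x: 0.5 + 2.0 / (1.0 + x * x),
    lam_max=2.5,
    mu=lambda x: -0.1 * x,
    sigma=lambda x: 0.3,
    T=10.0,
)
```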
Increasingly stringent environmental regulations are urging companies to cut emissions from their business operations. In particular, energy-intensive manufacturers, whose major source of emissions is the production process, must reconsider their production planning and technology choice strategies to comply with the regulations. In "Optimal Production Planning with Emissions Trading," X. Gong and S. X. Zhou develop a dynamic production model in which a manufacturer chooses a technology and decides the corresponding production quantity to meet random customer demands. Regulated under an emissions trading scheme, the manufacturer must hold enough emission allowances to cover its emissions but can also buy or sell allowances via forward contracts with stochastic trading prices. The authors characterize the optimal emissions trading and production policies that minimize the firm's expected total discounted cost. Using representative data from the cement industry, they demonstrate that the green technology can significantly reduce both costs and emissions.

In pricing and revenue management research and practice, determining whether the total revenue function is unimodal in price helps identify the revenue-maximizing price. Characterizing properties of the generalized failure rate of a product's willingness-to-pay distribution lets us assess the unimodality of the revenue function and the uniqueness of the optimal price. In "New Results Concerning Probability Distributions with Increasing Generalized Failure Rates," M. Banciu and P. Mirchandani extend our understanding of the generalized failure rates associated with probability distributions. Notably, they define the generalized failure rate function for discrete distributions and compare its properties with those of the generalized failure rate for continuous distributions. They also analyze and catalog the prevalence of the increasing generalized failure rate property for all common continuous and discrete distributions, thus simplifying the task of checking whether a valuation distribution yields a unimodal revenue function.

Savage's subjective expected utility is the most widely used model for representing preferences under uncertainty (that is, when the objective probabilities of events may not be known). In "A Simple Behavioral Characterization of Subjective Expected Utility," P. Blavatskyy presents a new behavioral characterization of subjective expected utility. The author shows that Wakker's trade-off consistency condition can be weakened to a simpler axiom, referred to as standard sequence invariance. This result is derived both in the connected topology approach and in the algebraic approach (where step-continuity is replaced with solvability and Archimedean axioms).

In "Supermodularity and Affine Policies in Dynamic Robust Optimization," D. A. Iancu, M. Sharma, and M. Sviridenko seek to bridge two well-established paradigms for solving dynamic robust optimization problems: dynamic programming (DP) and decision rules, that is, policies parameterized in the model uncertainties, obtained by restricting attention to particular functional forms and solving tractable convex optimization problems. The authors characterize a set of sufficient conditions on the DP value functions and the uncertainty sets that ensure the class of affine decision rules is optimal. These conditions are readily satisfied in several applications arising in inventory and capacity planning under convex costs, allowing all optimal (static and dynamic) decisions to be found by solving a single linear or mixed-integer program of small size. The results emphasize the interplay between the convexity and supermodularity of the value functions and the lattice structure of the uncertainty sets, suggesting new modeling paradigms for robust optimization.
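For readers new to decision rules, the affine class analyzed in the paper takes the standard form below; the notation is illustrative rather than the authors' own.

```latex
% Affine (linear) decision rule: the stage-t decision depends affinely on
% the uncertain parameters \xi_1, \dots, \xi_{t-1} observed before stage t.
\[
  x_t(\xi) \;=\; x_t^0 \;+\; \sum_{s=1}^{t-1} X_{t,s}\, \xi_s
\]
% Substituting this form into the constraints and enforcing them for all
% \xi in the uncertainty set yields a single tractable program in the
% coefficients x_t^0 and X_{t,s}.
```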
Countries that depend on electricity generated from hydroelectric reservoirs must store reservoir water prudently to avoid shortages and to minimize the cost of additional (thermal) generation. This leads to a multistage stochastic linear programming problem called the hydrothermal scheduling problem. In "On Solving Multistage Stochastic Programs with Coherent Risk Measures," A. Philpott, V. de Matos, and E. Finardi show how to modify the stochastic dual dynamic programming algorithm for solving multistage stochastic linear programs to handle a planner endowed with a dynamic coherent risk measure. They also demonstrate how to use inner and outer approximations to compute bounds on the risk-adjusted value of candidate policies, which indicate how close these policies are to optimality. They test the methodology on a hydrothermal scheduling model of the Brazilian power system with a risk measure that combines expected cost and average value at risk.

Stochastic dynamic games with strategic complementarities are used to model a variety of situations, such as network security models, dynamic search in markets, and oligopoly models. The standard solution concept for such games is Markov perfect equilibrium (MPE). However, the computational complexity of computing an MPE has limited the applicability of such games. In "Mean Field Equilibrium in Dynamic Games with Strategic Complementarities," S. Adlakha and R. Johari present an approximation methodology based on the idea that each player reacts to the other players' long-run average state. This approximation, called mean field equilibrium, reduces the computational complexity of studying stochastic dynamic games with complementarities. The authors provide a range of results that illustrate the value of this approach, including existence theorems, computational methods, and applications.

Multicriteria optimization problems with stochastic benchmarking constraints have recently received significant interest. These problems involve decisions leading to multiple random outcomes, which can be viewed as random vectors. A benchmarking constraint enforces the requirement that the random vector associated with a decision be preferable to a certain benchmark vector. A considerable body of recent literature focuses on preference relations among random vectors based on second-order stochastic dominance (SSD). However, SSD-based constraints are often overly restrictive and can lead to infeasible formulations. In "Optimization with Multivariate Conditional Value-at-Risk Constraints," N. Noyan and G. Rudolf introduce a class of multivariate risk preferences based on the conditional value-at-risk (CVaR) measure that provides natural and flexible relaxations of SSD-based constraints. They develop duality results for optimization problems with multivariate CVaR constraints, along with a cut-generation-based solution method. They extend the methodology to a wide class of coherent risk measures, including spectral measures.
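As background, the scalar CVaR on which these multivariate constraints build is straightforward to estimate from samples via the Rockafellar-Uryasev representation. The Python sketch below covers only the univariate case and is not the authors' code; the paper's constraints instead apply CVaR to scalarizations of random vectors.

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Empirical CVaR via the Rockafellar-Uryasev representation:
    CVaR_alpha(L) = min_t { t + E[(L - t)^+] / (1 - alpha) },
    attained at t = VaR_alpha(L) (here, the empirical alpha-quantile)."""
    t = np.quantile(losses, alpha)
    return t + np.maximum(losses - t, 0.0).mean() / (1.0 - alpha)

samples = np.random.default_rng(1).standard_normal(100_000)
print(cvar(samples, 0.95))  # roughly 2.06 for standard normal losses
```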
Highly customized products are typically managed through make-to-order policies that attempt to minimize inventory levels while maintaining responsive lead times to satisfy customers' orders. The diversity of product types makes predicting demand in advance challenging, which can lead to wasted material or degraded service levels. Moreover, typical cost structures include many setup costs, which make balancing cost and service levels even more challenging. In "Online Make-to-Order Joint Replenishment Model: Primal-Dual Competitive Algorithms," N. Buchbinder, T. Kimbrel, R. Levi, K. Makarychev, and M. Sviridenko study a make-to-order variant of one of the core cost structures commonly found in practical settings. They develop online policies that require no information about future customers yet provide robust solutions that perform well against future demand scenarios. Theoretical analysis and extensive computational experiments establish the policies' performance.

Little's Law, the fundamental queueing principle, relates limits of averages and expected values of stationary distributions. In practice, however, we often consider averages from observations over finite time intervals, for which Little's Law no longer applies exactly. It has been observed that the definitions can be modified so that the Little's Law relation among the finite averages remains valid. However, in "Statistical Analysis with Little's Law," S.-H. Kim and W. Whitt advocate not modifying the definitions. Instead, they propose a statistical approach, estimating confidence intervals and considering modified estimators that reduce bias. They also suggest using supporting simulation models to confirm the method's effectiveness, and they illustrate the approach using data from a U.S. bank call center.

Inventory holding cost is often a major cost component of production processes, and adjusting production levels in accordance with current and anticipated demand is an important element of managerial decision making. In "Optimal Production Management When Demand Depends on the Business Cycle," A. Cadenillas, P. Lakner, and M. Pinedo assume that the economy alternates between recessionary and expansionary periods of random length. The authors consider two models for determining the optimal production level. In the first model, the optimal production level is given explicitly, based on the current inventory level and the current state of the economy. In the second, the optimal production level depends on the current inventory level and on the state of the economy as estimated from past and present demand data.
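To make the regime-switching setting concrete, here is a minimal simulation sketch; all parameters and the feedback rule are hypothetical illustrations, not the authors' optimal policy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters, for illustration only (not from the paper).
mean_demand = {0: 1.4, 1: 0.6}    # regime 0 = expansion, 1 = recession
switch_prob = {0: 0.02, 1: 0.05}  # per-step probability of a regime change
inv_target  = {0: 8.0, 1: 3.0}    # illustrative inventory targets by regime
dt, n_steps = 0.1, 1_000

inventory, regime, path = 5.0, 0, []
for _ in range(n_steps):
    if rng.random() < switch_prob[regime]:
        regime = 1 - regime                  # economy switches regimes
    # Hypothetical feedback rule: produce faster when below the target.
    rate = max(0.0, mean_demand[regime]
               + 0.5 * (inv_target[regime] - inventory))
    demand = (mean_demand[regime] * dt
              + 0.3 * np.sqrt(dt) * rng.standard_normal())
    inventory += rate * dt - demand          # inventory balance equation
    path.append(inventory)
```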